Epoch 1/300
Epoch 00000: val_loss improved from inf to 1.04548, saving model to ./checkpoints/weights.000-1.0455.hdf5
3072/3021 [==============================] - 85s - loss: 1.2478 - val_loss: 1.0455
Epoch 2/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.9460Epoch 00001: val_loss improved from 1.04548 to 0.80567, saving model to ./checkpoints/weights.001-0.8057.hdf5
3072/3021 [==============================] - 70s - loss: 0.9437 - val_loss: 0.8057
Epoch 3/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.7676Epoch 00002: val_loss did not improve
3047/3021 [==============================] - 71s - loss: 0.7734 - val_loss: 0.9774
Epoch 4/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.6906Epoch 00003: val_loss improved from 0.80567 to 0.63040, saving model to ./checkpoints/weights.003-0.6304.hdf5
3072/3021 [==============================] - 70s - loss: 0.6909 - val_loss: 0.6304
Epoch 5/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.5838Epoch 00004: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.5813 - val_loss: 0.7017
Epoch 6/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.5608Epoch 00005: val_loss improved from 0.63040 to 0.61503, saving model to ./checkpoints/weights.005-0.6150.hdf5
3047/3021 [==============================] - 69s - loss: 0.5596 - val_loss: 0.6150
Epoch 7/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.5016Epoch 00006: val_loss improved from 0.61503 to 0.53570, saving model to ./checkpoints/weights.006-0.5357.hdf5
3072/3021 [==============================] - 70s - loss: 0.4975 - val_loss: 0.5357
Epoch 8/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.4361Epoch 00007: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.4361 - val_loss: 0.5432
Epoch 9/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.4059Epoch 00008: val_loss did not improve
3047/3021 [==============================] - 69s - loss: 0.4032 - val_loss: 0.5752
Epoch 10/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.3733Epoch 00009: val_loss improved from 0.53570 to 0.33159, saving model to ./checkpoints/weights.009-0.3316.hdf5
3072/3021 [==============================] - 70s - loss: 0.3744 - val_loss: 0.3316
Epoch 11/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.3254Epoch 00010: val_loss did not improve
3072/3021 [==============================] - 69s - loss: 0.3243 - val_loss: 0.4274
Epoch 12/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.3495Epoch 00011: val_loss did not improve
3047/3021 [==============================] - 69s - loss: 0.3477 - val_loss: 0.4750
Epoch 13/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.3056Epoch 00012: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.3016 - val_loss: 0.5004
Epoch 14/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.2465Epoch 00013: val_loss improved from 0.33159 to 0.26357, saving model to ./checkpoints/weights.013-0.2636.hdf5
3072/3021 [==============================] - 70s - loss: 0.2496 - val_loss: 0.2636
Epoch 15/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.2556Epoch 00014: val_loss did not improve
3047/3021 [==============================] - 70s - loss: 0.2575 - val_loss: 0.3512
Epoch 16/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.2091Epoch 00015: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.2092 - val_loss: 0.3096
Epoch 17/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.2256Epoch 00016: val_loss improved from 0.26357 to 0.22867, saving model to ./checkpoints/weights.016-0.2287.hdf5
3072/3021 [==============================] - 70s - loss: 0.2235 - val_loss: 0.2287
Epoch 18/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.2045Epoch 00017: val_loss did not improve
3047/3021 [==============================] - 70s - loss: 0.2033 - val_loss: 0.2895
Epoch 19/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.1903Epoch 00018: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.1889 - val_loss: 0.2927
Epoch 20/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.1784Epoch 00019: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.1772 - val_loss: 0.3048
Epoch 21/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.1967Epoch 00020: val_loss did not improve
3047/3021 [==============================] - 69s - loss: 0.1949 - val_loss: 0.3690
Epoch 22/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.1538Epoch 00021: val_loss did not improve
3072/3021 [==============================] - 71s - loss: 0.1568 - val_loss: 0.3551
Epoch 23/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.1695Epoch 00022: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.1685 - val_loss: 0.3090
Epoch 24/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.1290Epoch 00023: val_loss did not improve
3047/3021 [==============================] - 69s - loss: 0.1279 - val_loss: 0.2879
Epoch 25/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.1346Epoch 00024: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.1344 - val_loss: 0.2958
Epoch 26/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.1212Epoch 00025: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.1207 - val_loss: 0.3149
Epoch 27/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.1540Epoch 00026: val_loss did not improve
3047/3021 [==============================] - 70s - loss: 0.1538 - val_loss: 0.3790
Epoch 28/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.1096Epoch 00027: val_loss did not improve
Epoch 00027: reducing learning rate to 1.9999999494757503e-05.
3072/3021 [==============================] - 70s - loss: 0.1103 - val_loss: 0.2986
Epoch 29/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0657Epoch 00028: val_loss improved from 0.22867 to 0.20805, saving model to ./checkpoints/weights.028-0.2080.hdf5
3072/3021 [==============================] - 70s - loss: 0.0655 - val_loss: 0.2080
Epoch 30/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0542Epoch 00029: val_loss did not improve
3047/3021 [==============================] - 70s - loss: 0.0550 - val_loss: 0.2437
Epoch 31/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0487Epoch 00030: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0499 - val_loss: 0.2660
Epoch 32/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0522Epoch 00031: val_loss improved from 0.20805 to 0.20593, saving model to ./checkpoints/weights.031-0.2059.hdf5
3072/3021 [==============================] - 71s - loss: 0.0518 - val_loss: 0.2059
Epoch 33/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0492Epoch 00032: val_loss did not improve
3047/3021 [==============================] - 70s - loss: 0.0486 - val_loss: 0.2317
Epoch 34/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0539Epoch 00033: val_loss improved from 0.20593 to 0.17129, saving model to ./checkpoints/weights.033-0.1713.hdf5
3072/3021 [==============================] - 70s - loss: 0.0542 - val_loss: 0.1713
Epoch 35/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0475Epoch 00034: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0474 - val_loss: 0.2162
Epoch 36/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0455Epoch 00035: val_loss did not improve
3047/3021 [==============================] - 69s - loss: 0.0465 - val_loss: 0.2032
Epoch 37/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0410Epoch 00036: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0404 - val_loss: 0.2118
Epoch 38/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0423Epoch 00037: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0420 - val_loss: 0.1720
Epoch 39/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0413Epoch 00038: val_loss did not improve
3047/3021 [==============================] - 70s - loss: 0.0407 - val_loss: 0.2387
Epoch 40/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0350Epoch 00039: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0348 - val_loss: 0.2889
Epoch 41/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0315Epoch 00040: val_loss did not improve
3072/3021 [==============================] - 69s - loss: 0.0326 - val_loss: 0.2150
Epoch 42/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0361Epoch 00041: val_loss did not improve
3047/3021 [==============================] - 69s - loss: 0.0357 - val_loss: 0.2293
Epoch 43/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0305Epoch 00042: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0301 - val_loss: 0.1747
Epoch 44/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0385Epoch 00043: val_loss improved from 0.17129 to 0.17055, saving model to ./checkpoints/weights.043-0.1705.hdf5
3072/3021 [==============================] - 70s - loss: 0.0386 - val_loss: 0.1705
Epoch 45/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0363Epoch 00044: val_loss did not improve
Epoch 00044: reducing learning rate to 3.999999898951501e-06.
3047/3021 [==============================] - 69s - loss: 0.0360 - val_loss: 0.2691
Epoch 46/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0322Epoch 00045: val_loss improved from 0.17055 to 0.15367, saving model to ./checkpoints/weights.045-0.1537.hdf5
3072/3021 [==============================] - 70s - loss: 0.0320 - val_loss: 0.1537
Epoch 47/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0282Epoch 00046: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0280 - val_loss: 0.2337
Epoch 48/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0262Epoch 00047: val_loss did not improve
3047/3021 [==============================] - 70s - loss: 0.0263 - val_loss: 0.2304
Epoch 49/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0219Epoch 00048: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0217 - val_loss: 0.2491
Epoch 50/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0351Epoch 00049: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0370 - val_loss: 0.1880
Epoch 51/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0267Epoch 00050: val_loss improved from 0.15367 to 0.13505, saving model to ./checkpoints/weights.050-0.1350.hdf5
3047/3021 [==============================] - 70s - loss: 0.0265 - val_loss: 0.1350
Epoch 52/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0243Epoch 00051: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0257 - val_loss: 0.2410
Epoch 53/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0243Epoch 00052: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0251 - val_loss: 0.2465
Epoch 54/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0250Epoch 00053: val_loss did not improve
3047/3021 [==============================] - 69s - loss: 0.0262 - val_loss: 0.1908
Epoch 55/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0237Epoch 00054: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0233 - val_loss: 0.3144
Epoch 56/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0200Epoch 00055: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0198 - val_loss: 0.2361
Epoch 57/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0180Epoch 00056: val_loss did not improve
3047/3021 [==============================] - 70s - loss: 0.0184 - val_loss: 0.1965
Epoch 58/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0246Epoch 00057: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0246 - val_loss: 0.1858
Epoch 59/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0248Epoch 00058: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0252 - val_loss: 0.2242
Epoch 60/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0271Epoch 00059: val_loss did not improve
3047/3021 [==============================] - 70s - loss: 0.0270 - val_loss: 0.2275
Epoch 61/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0202Epoch 00060: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0201 - val_loss: 0.1711
Epoch 62/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0244Epoch 00061: val_loss did not improve
Epoch 00061: reducing learning rate to 7.999999979801942e-07.
3072/3021 [==============================] - 70s - loss: 0.0242 - val_loss: 0.1970
Epoch 63/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0244Epoch 00062: val_loss did not improve
3047/3021 [==============================] - 69s - loss: 0.0242 - val_loss: 0.2559
Epoch 64/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0231Epoch 00063: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0233 - val_loss: 0.2054
Epoch 65/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0206Epoch 00064: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0204 - val_loss: 0.2215
Epoch 66/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0249Epoch 00065: val_loss did not improve
3047/3021 [==============================] - 70s - loss: 0.0249 - val_loss: 0.1982
Epoch 67/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0198Epoch 00066: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0196 - val_loss: 0.2284
Epoch 68/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0235Epoch 00067: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0231 - val_loss: 0.2271
Epoch 69/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0219Epoch 00068: val_loss did not improve
3047/3021 [==============================] - 70s - loss: 0.0216 - val_loss: 0.1576
Epoch 70/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0197Epoch 00069: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0193 - val_loss: 0.1911
Epoch 71/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0241Epoch 00070: val_loss did not improve
3021/3021 [==============================] - 69s - loss: 0.0241 - val_loss: 0.1895
Epoch 72/300
2970/3021 [============================>.] - ETA: 0s - loss: 0.0236Epoch 00071: val_loss did not improve
Epoch 00071: reducing learning rate to 1.600000018697756e-07.
3034/3021 [==============================] - 70s - loss: 0.0232 - val_loss: 0.2577
Epoch 73/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0229Epoch 00072: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0232 - val_loss: 0.1452
Epoch 74/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0228Epoch 00073: val_loss did not improve
3047/3021 [==============================] - 70s - loss: 0.0226 - val_loss: 0.1755
Epoch 75/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0190Epoch 00074: val_loss improved from 0.13505 to 0.10881, saving model to ./checkpoints/weights.074-0.1088.hdf5
3072/3021 [==============================] - 70s - loss: 0.0196 - val_loss: 0.1088
Epoch 76/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0224Epoch 00075: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0230 - val_loss: 0.1868
Epoch 77/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0224Epoch 00076: val_loss did not improve
3047/3021 [==============================] - 70s - loss: 0.0220 - val_loss: 0.1434
Epoch 78/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0238Epoch 00077: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0235 - val_loss: 0.2376
Epoch 79/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0172Epoch 00078: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0170 - val_loss: 0.1615
Epoch 80/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0218Epoch 00079: val_loss did not improve
3047/3021 [==============================] - 69s - loss: 0.0227 - val_loss: 0.1963
Epoch 81/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0218Epoch 00080: val_loss did not improve
3072/3021 [==============================] - 69s - loss: 0.0228 - val_loss: 0.2624
Epoch 82/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0210Epoch 00081: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0209 - val_loss: 0.1959
Epoch 83/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0222Epoch 00082: val_loss did not improve
3047/3021 [==============================] - 69s - loss: 0.0224 - val_loss: 0.1929
Epoch 84/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0184Epoch 00083: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0182 - val_loss: 0.1770
Epoch 85/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0184Epoch 00084: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0187 - val_loss: 0.1540
Epoch 86/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0222Epoch 00085: val_loss did not improve
Epoch 00085: reducing learning rate to 3.199999980552093e-08.
3047/3021 [==============================] - 70s - loss: 0.0221 - val_loss: 0.1892
Epoch 87/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0206Epoch 00086: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0203 - val_loss: 0.1861
Epoch 88/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0197Epoch 00087: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0199 - val_loss: 0.1536
Epoch 89/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0202Epoch 00088: val_loss did not improve
3047/3021 [==============================] - 70s - loss: 0.0208 - val_loss: 0.1542
Epoch 90/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0167Epoch 00089: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0165 - val_loss: 0.2076
Epoch 91/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0193Epoch 00090: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0191 - val_loss: 0.2714
Epoch 92/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0205Epoch 00091: val_loss did not improve
3047/3021 [==============================] - 70s - loss: 0.0201 - val_loss: 0.1680
Epoch 93/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0217Epoch 00092: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0213 - val_loss: 0.1884
Epoch 94/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0248Epoch 00093: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0245 - val_loss: 0.1890
Epoch 95/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0213Epoch 00094: val_loss did not improve
3047/3021 [==============================] - 69s - loss: 0.0212 - val_loss: 0.2831
Epoch 96/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0201Epoch 00095: val_loss did not improve
Epoch 00095: reducing learning rate to 6.399999818995639e-09.
3072/3021 [==============================] - 70s - loss: 0.0198 - val_loss: 0.2074
Epoch 97/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0164Epoch 00096: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0164 - val_loss: 0.2718
Epoch 98/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0218Epoch 00097: val_loss did not improve
3047/3021 [==============================] - 70s - loss: 0.0214 - val_loss: 0.2625
Epoch 99/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0208Epoch 00098: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0209 - val_loss: 0.1456
Epoch 100/300
3008/3021 [============================>.] - ETA: 0s - loss: 0.0216Epoch 00099: val_loss did not improve
3072/3021 [==============================] - 70s - loss: 0.0214 - val_loss: 0.2114
Epoch 101/300
2983/3021 [============================>.] - ETA: 0s - loss: 0.0203Epoch 00100: val_loss did not improve
3047/3021 [==============================] - 69s - loss: 0.0200 - val_loss: 0.1885
Epoch 00100: early stopping
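
The run above is driven by three standard Keras callbacks: a `ModelCheckpoint` that saves only on `val_loss` improvements (the `weights.{epoch}-{val_loss}.hdf5` filenames), a `ReduceLROnPlateau` that cuts the learning rate by 5x each time validation loss stalls (2e-5 → 4e-6 → 8e-7 → ...), and an `EarlyStopping` that halts the 300-epoch budget at epoch 100. The snippet below is a minimal sketch consistent with those messages, not the exact training script: the patience values, the initial learning rate, the batch size, and the `model`/data variable names are assumptions (the overshooting progress counts such as `3072/3021` hint that a generator-based `fit_generator` call was actually used, but plain `fit` is shown here for brevity).

```python
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau, EarlyStopping

# Save a checkpoint only when val_loss improves; the filename pattern matches
# the log, e.g. ./checkpoints/weights.000-1.0455.hdf5
checkpoint = ModelCheckpoint(
    filepath='./checkpoints/weights.{epoch:03d}-{val_loss:.4f}.hdf5',
    monitor='val_loss',
    save_best_only=True,
    verbose=1)

# Multiply the learning rate by 0.2 when val_loss plateaus; factor=0.2 matches
# the successive reductions in the log. patience=10 is an assumption.
reduce_lr = ReduceLROnPlateau(
    monitor='val_loss',
    factor=0.2,
    patience=10,
    verbose=1)

# Stop once val_loss has not improved for a long stretch; the run above
# stopped at epoch 100 of 300. patience=25 is an assumption.
early_stop = EarlyStopping(
    monitor='val_loss',
    patience=25,
    verbose=1)

# model, X_train/y_train, X_val/y_val are placeholders for the actual model
# and data; nb_epoch is the Keras 1.x argument name (use epochs= in Keras 2).
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          batch_size=64,
          nb_epoch=300,
          callbacks=[checkpoint, reduce_lr, early_stop])
```

Because `save_best_only=True` keeps every new best model on disk, the checkpoint to reload afterwards is simply the last one written, here `./checkpoints/weights.074-0.1088.hdf5` with the lowest validation loss of 0.1088.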